Lab Three: Extending Logistic Regression¶

Overview¶

Overview of the dataset¶

  • The dataset chosen for this analysis is a comprehensive overview of sustainable energy indicators and various relevant factors across all countries from 2000 to 2020.
  • It delves into crucial aspects such as electricity access, renewable energy usage, carbon emissions, energy intensity, financial flows, and economic growth. By examining this dataset, researchers can compare different nations, monitor progress towards Sustainable Development Goal 7, and gain significant insights into global energy consumption patterns overtime.
  • There are 176 countries in this dataset
  • This dataset serves as a valuable resource for analyzing and understanding the sustainable energy landscape on a global scale.
  • The classification task in this scenario aims to predict the level of accessibility of electricity in different countries based on the percentage of the population with access to electricity. This is crucial for stakeholders such as policy makers, international organizers, and energy companies. The model's predictions can be utilized in various ways, including policy planning, investment decisions, assessing environmental impact, and directing international aid effectively.
  • Global Data on Sustainable Energy (2000-2020)

Features¶

  • There are 21 data columns for this set:
  • Entity: The name of the country or region for which the data is reported.
  • Year: The year for which the data is reported, ranging from 2000 to 2020.
  • Access to electricity (% of population): The percentage of population with access to electricity.
  • Access to clean fuels for cooking (% of population): The percentage of the population with primary reliance on clean fuels.
  • Renewable-electricity-generating-capacity-per-capita: Installed Renewable energy capacity per person
  • Financial flows to developing countries (USD) Aid and assistance from developed countries for clean energy projects.
  • Renewable energy share in total final energy consumption (%): Percentage of renewable energy in final energy consumption.
  • Electricity from fossil fuels (TWh): Electricity generated from fossil fuels (coal, oil, gas) in terawatt-hours.
  • Electricity from nuclear (TWh): Electricity generated from nuclear power in terawatt-hours.
  • Electricity from renewables (TWh): Electricity generated from renewable sources (hydro, solar, wind, etc.) in terawatt-hours.
  • Low-carbon electricity (% electricity): Percentage of electricity from low-carbon sources (nuclear and renewables).
  • Primary energy consumption per capita (kWh/person): Energy consumption per person in kilowatt-hours
  • Energy intensity level of primary energy (MJ/USD 2011 PPP GDP): Energy use per unit of GDP at purchasing power parity.
  • Value_co2_emissions (metric tons per capita): Carbon dioxide emissions per person in metric tons.
  • Renewables (% equivalent primary energy): Equivalent primary energy that is derived from renewable sources.
  • GDP growth (annual %): Annual GDP growth rate based on constant local currency.
  • GDP per capita: Gross domestic product per person.
  • Density (P/Km2): Population density in persons per square kilometer.
  • Land Area (Km2): Total land area in square kilometers.
  • Latitude: Latitude of the country's centroid in decimal degrees.
  • Longitude: Longitude of the country's centroid in decimal degrees.

Business Case:¶

  • Being able to classify countries based on their access to electricity is essentialy to stakeholders like policy makers, international organizers, and energy companies. This can then be utized for various functions:
  • Policy Maker: Governments and policymakers can utilize the classification results for strategic planning, especially in regions with low access to electricity. This information can guide policy reforms and investments in infrastructure.
  • International Organizers: Organizations such as the United Nations and World Bank can use this data to assess global progress in achieving sustainable development goals. It helps in identifying regions that require international aid and support.
  • Energy Companies: Energy companies can utilize this data for market analysis and investment decisions. It enables them to identify regions with high demand potential for expanding their services.

Task Classification:¶

  • Target Variable: Access to Electricity based on the percentage of the population.
  • Input Variables (Classes):
    • Entity, Year, Access to clean fuels, Electricity from fossil fuels, Electricity from nuclear, Electricity from renewables Low-carbon electricity, Primary energy consumption per capita, Energy intensity level of primary energy, Value_co2_emissions, GDP growth, GDP per capita, and Density
  • To accomplish this we categorize continous variables based on specific ranges:
    • Class 1: Very Low Access (0% - 20%)
    • Class 2: Low Access (21% - 40%)
    • Class 3: Moderate Access (41% - 60%)
    • Class 4: High Access (61% - 80%)
    • Class 5: Very High Access (81% - 100%)

Data Preparation¶

In [11]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.metrics import accuracy_score
import re
from scipy.special import expit  # Sigmoid function
np.seterr(divide='ignore', invalid='ignore')
Out[11]:
{'divide': 'ignore', 'over': 'warn', 'under': 'ignore', 'invalid': 'ignore'}
In [9]:
data = pd.read_csv('globa_data.csv')
In [10]:
data
Out[10]:
Entity Year Access to electricity (% of population) Access to clean fuels for cooking Renewable-electricity-generating-capacity-per-capita Financial flows to developing countries (US $) Renewable energy share in the total final energy consumption (%) Electricity from fossil fuels (TWh) Electricity from nuclear (TWh) Electricity from renewables (TWh) ... Primary energy consumption per capita (kWh/person) Energy intensity level of primary energy (MJ/$2017 PPP GDP) Value_co2_emissions_kt_by_country Renewables (% equivalent primary energy) gdp_growth gdp_per_capita Density\n(P/Km2) Land Area(Km2) Latitude Longitude
0 Afghanistan 2000 1.613591 6.2 9.22 20000.0 44.99 0.16 0.0 0.31 ... 302.59482 1.64 760.000000 NaN NaN NaN 60 652230.0 33.939110 67.709953
1 Afghanistan 2001 4.074574 7.2 8.86 130000.0 45.60 0.09 0.0 0.50 ... 236.89185 1.74 730.000000 NaN NaN NaN 60 652230.0 33.939110 67.709953
2 Afghanistan 2002 9.409158 8.2 8.47 3950000.0 37.83 0.13 0.0 0.56 ... 210.86215 1.40 1029.999971 NaN NaN 179.426579 60 652230.0 33.939110 67.709953
3 Afghanistan 2003 14.738506 9.5 8.09 25970000.0 36.66 0.31 0.0 0.63 ... 229.96822 1.40 1220.000029 NaN 8.832278 190.683814 60 652230.0 33.939110 67.709953
4 Afghanistan 2004 20.064968 10.9 7.75 NaN 44.24 0.33 0.0 0.56 ... 204.23125 1.20 1029.999971 NaN 1.414118 211.382074 60 652230.0 33.939110 67.709953
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
3644 Zimbabwe 2016 42.561730 29.8 62.88 30000.0 81.90 3.50 0.0 3.32 ... 3227.68020 10.00 11020.000460 NaN 0.755869 1464.588957 38 390757.0 -19.015438 29.154857
3645 Zimbabwe 2017 44.178635 29.8 62.33 5570000.0 82.46 3.05 0.0 4.30 ... 3068.01150 9.51 10340.000150 NaN 4.709492 1235.189032 38 390757.0 -19.015438 29.154857
3646 Zimbabwe 2018 45.572647 29.9 82.53 10000.0 80.23 3.73 0.0 5.46 ... 3441.98580 9.83 12380.000110 NaN 4.824211 1254.642265 38 390757.0 -19.015438 29.154857
3647 Zimbabwe 2019 46.781475 30.1 81.40 250000.0 81.50 3.66 0.0 4.58 ... 3003.65530 10.47 11760.000230 NaN -6.144236 1316.740657 38 390757.0 -19.015438 29.154857
3648 Zimbabwe 2020 52.747670 30.4 80.61 30000.0 81.90 3.40 0.0 4.19 ... 2680.13180 10.00 NaN NaN -6.248748 1214.509820 38 390757.0 -19.015438 29.154857

3649 rows × 21 columns

In [13]:
# Load the dataset (replace 'your_data.csv' with the actual file name or source)
data = pd.read_csv('globa_data.csv')

# Define a function to clean and convert to float
def clean_and_convert(value):
    if isinstance(value, str):
        cleaned_value = re.sub(r'[^\d.]', '', value)
        if cleaned_value:
            return float(cleaned_value)
    return value

# Apply the cleaning and conversion function to all columns
for column in data.columns:
    data[column] = data[column].apply(clean_and_convert)

# Encode the 'Entity' column using LabelEncoder

# data['Entity'] = label_encoder.fit_transform(data['Entity'])

# Define the target variable and classes
data['Access Level'] = pd.cut(data['Access to electricity (% of population)'],
                              bins=[0, 20, 40, 60, 80, 100],
                              labels=['Very Low', 'Low', 'Moderate', 'High', 'Very High'])


data["Access Level"].dropna(inplace=True)  

#dropping columns that are not relevant to the analysis
data.drop(columns=['Access to electricity (% of population)'], inplace=True)
data = data.drop(columns=["Renewable energy share in the total final energy consumption (%)"])
data = data.drop(columns=["Renewables (% equivalent primary energy)"])
data = data.drop(columns=["Financial flows to developing countries (US $)"])
data = data.drop(columns=["Renewable-electricity-generating-capacity-per-capita"])
data = data.drop(columns=["Longitude"])
data = data.drop(columns=["Latitude"])
data = data.drop(columns=["Land Area(Km2)"])
isna_mask = data.isna().sum(axis=1) > 0
# print(isna_mask.sum())
data = data[~isna_mask]

Transforming my target variable¶

  • For classification, I created a target variable called Access Level, which uses one hot encoding to show how much access a country has to electricity based on the percent of the population.
In [15]:
y = data["Access Level"].reset_index(drop=True)
# Split the data into features (X) and target (y)
X = data.drop(columns=['Access Level'])



# One-hot encode the categorical 'Entity' column
X = pd.get_dummies(X, columns=['Entity'], prefix=['Entity'])


# Perform feature scaling on the X data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)



#train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)


label_encoder = LabelEncoder()
label_encoder.fit(y_train)
y_train = label_encoder.transform(y_train)
y_test = label_encoder.transform(y_test)

Training and Testing Split¶

80/20 Split (Preferred):

When considering the split ratio for my dataset, an 80/20 split stands out as the preferred choice. This allocation reserves 80% of the data for training and sets aside 20% for testing. Here's why it's a strong choice:

  1. Balanced Allocation: The 80/20 split provides an equitable division of the dataset. With 80% for training and 20% for testing, it strikes an optimal balance between effective model learning and comprehensive model evaluation.

  2. Effective Learning: With 80% of the data (2919 samples) dedicated to training, my model can learn from a diverse and representative set of examples. This ample training data allows the model to capture complex patterns and relationships within the dataset, leading to a more robust model.

  3. Thorough Evaluation: The 20% reserved for testing (730 samples) offers a substantial chunk for rigorous model evaluation. This sizeable test set ensures that my model's performance is rigorously assessed, providing meaningful insights into its generalization capability.

  4. Robustness: An 80/20 split is a well-established industry standard, and it is a practical choice for most machine learning tasks. It offers a well-rounded approach to both training and testing, making it an ideal starting point for my classification task.

70/30 Split (Alternative):

While the 70/30 split is a viable alternative, there are some considerations that make the 80/20 split the preferred choice:

  1. Less Test Data: In a 70/30 split, 70% of the data is used for training, leaving 30% for testing. While this still provides a substantial training set, the test set (1094 samples) is slightly smaller. This may limit the thoroughness of the model evaluation.

  2. Balanced Learning and Testing: The 70/30 split slightly tilts the balance toward training, which might be beneficial if training is a primary concern. However, it could compromise the rigor of model testing.

  3. Industry Standard: The 80/20 split is widely adopted in machine learning and data science practices, making it a more familiar and standardized choice. It ensures that our approach aligns with best practices in the field.

In conclusion, while the 70/30 split is certainly reasonable, the 80/20 split offers a more balanced and well-rounded approach to my classification task. It enables my model to learn effectively while maintaining a substantial test set for rigorous evaluation. The 80/20 split is not only a practical choice but also aligns with industry standards, making it the preferred split ratio for my dataset.

Break down of the variables after pre processing¶

In [17]:
X_new = pd.DataFrame(X_scaled, columns = X.columns) 
X_new.describe()
Out[17]:
Year Access to clean fuels for cooking Electricity from fossil fuels (TWh) Electricity from nuclear (TWh) Electricity from renewables (TWh) Low-carbon electricity (% electricity) Primary energy consumption per capita (kWh/person) Energy intensity level of primary energy (MJ/$2017 PPP GDP) Value_co2_emissions_kt_by_country gdp_growth ... Entity_Uganda Entity_Ukraine Entity_United Arab Emirates Entity_United Kingdom Entity_United States Entity_Uruguay Entity_Uzbekistan Entity_Vanuatu Entity_Zambia Entity_Zimbabwe
count 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 ... 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03 2.888000e+03
mean 1.511380e-14 -2.952394e-17 -9.841312e-18 -2.460328e-17 1.476197e-17 6.396853e-17 4.920656e-17 2.165089e-16 -3.444459e-17 8.857181e-17 ... -1.476197e-17 -4.428590e-17 -3.444459e-17 -2.460328e-17 -2.460328e-17 -7.380984e-17 -7.380984e-17 -5.412722e-17 -5.412722e-17 -2.460328e-17
std 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 ... 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00 1.000173e+00
min -1.670741e+00 -1.589617e+00 -2.041295e-01 -1.940244e-01 -2.466368e-01 -1.154582e+00 -6.996626e-01 -1.224983e+00 -2.056651e-01 -8.755782e+00 ... -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02
25% -8.011343e-01 -1.022023e+00 -2.033790e-01 -1.940244e-01 -2.457977e-01 -9.742272e-01 -6.263597e-01 -5.974468e-01 -2.029938e-01 -4.576486e-01 ... -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02
50% 6.847250e-02 4.749434e-01 -1.961694e-01 -1.940244e-01 -2.303217e-01 -1.350514e-01 -3.828887e-01 -2.864800e-01 -1.932135e-01 -2.345189e-02 ... -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02
75% 9.380793e-01 9.671142e-01 -1.360866e-01 -1.940244e-01 -1.535012e-01 8.129803e-01 1.537978e-01 2.009814e-01 -1.348503e-01 4.442603e-01 ... -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02 -8.350749e-02
max 1.633765e+00 9.671142e-01 1.345998e+01 1.011700e+01 1.853496e+01 1.729051e+00 6.397726e+00 7.610956e+00 1.292949e+01 1.284183e+01 ... 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01 1.197497e+01

8 rows × 161 columns

Columns Removed Due to Missing Values:

In my data preprocessing, I have opted to remove specific columns due to the substantial number of missing values, as these gaps could adversely affect my data analysis. The columns removed, along with their respective counts of missing values, are as follows:

  • "Renewables (% equivalent primary energy)": 2137 missing
  • "Financial flows to developing countries (US $)": 2089 missing
  • "Renewable-electricity-generating-capacity-per-capita": 931 missing

Columns Removed Unrelated to the Study:

In addition to addressing the missing values, I also excluded columns that are not directly relevant to the scope of my study. These columns include:

  • "Renewable energy share in the total final energy consumption (%)"
  • "Longitude"
  • "Latitude"
  • "Land Area(Km2)"

Custom Logistic Regression Class¶

In [18]:
%%time
# from last time, our logistic regression algorithm is given by (including everything we previously had):
class BinaryLogisticRegression:
    def __init__(self, eta, iterations=20, C1=0.001, C2=0.001):
        self.eta = eta
        self.iters = iterations
        self.C1 = C1
        self.C2 = C2
        # internally we will store the weights as self.w_ to keep with sklearn conventions

    def __str__(self):
        if(hasattr(self,'w_')):
            return 'Binary Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained Binary Logistic Regression Object'

    # convenience, private:
    @staticmethod
    def _add_bias(X):
        return np.hstack((np.ones((X.shape[0],1)),X)) # add bias term

    @staticmethod
    def _sigmoid(theta):
        # increase stability, redefine sigmoid operation
        return expit(theta) #1/(1+np.exp(-theta))

    # public:
    def predict_proba(self,X,add_bias=True):
        # add bias term if requested
        Xb = self._add_bias(X) if add_bias else X
        return self._sigmoid(Xb @ self.w_) # return the probability y=1

    def predict(self,X):
        return (self.predict_proba(X)>0.5) #return the actual prediction

    # vectorized gradient calculation with regularization using L2 Norm
    def _get_gradient(self,X,y):
        ydiff = y-self.predict_proba(X,add_bias=False).ravel() # get y difference
        gradient = np.mean(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through

        gradient = gradient.reshape(self.w_.shape)
        #gradient[1:] += -2 * self.w_[1:] * self.C1 # TODO REMOVE THIS
        return gradient
    def _get_gradient_L2(self, X, y):
        gradient = self._get_gradient(X, y)
        gradient[1:] += -2 * self.w_[1:] * self.C2
        return gradient
    def _get_gradient_L1(self, X, y):
        gradient = self._get_gradient(X, y)
        l1_der = self.w_[1:] / np.abs(self.w_[1:])
        l1_der[self.w_[1:] == 0] = 0
        gradient[1:] +=  -1 * l1_der * self.C1
        return gradient
    def _get_gradient_elastic(self, X, y):
        gradient = self._get_gradient(X, y)
        l1_der = self.w_[1:] / np.abs(self.w_[1:])
        l1_der[self.w_[1:] == 0] = 0
        gradient[1:] +=  -1 * l1_der * self.C1
        gradient[1:] += -2 * self.w_[1:] * self.C2
        return gradient


    def fit(self, X, y, regularization=None):
        Xb = self._add_bias(X) # add bias term
        num_samples, num_features = Xb.shape

        self.w_ = np.zeros((num_features,1)) # init weight vector to zeros

        # for as many as the max iterations
        for _ in range(self.iters):
            if(regularization == 'L1'):
                gradient = self._get_gradient_L1(Xb, y)
            elif(regularization == 'L2'):
                gradient = self._get_gradient_L2(Xb, y)
            elif(regularization == 'elastic'):
                gradient = self._get_gradient_elastic(Xb, y)
            else:
                gradient = self._get_gradient(Xb,y)
            self.w_ += gradient*self.eta # multiply by learning rate
            # add bacause maximizing

blr = BinaryLogisticRegression(eta=0.1,iterations=50,C1=0.001)

blr.fit(X_train,y_train,regularization=None)
yhat = blr.predict(X_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.10553633217993079
CPU times: user 83 ms, sys: 137 ms, total: 220 ms
Wall time: 33.2 ms
In [19]:
class StochasticLogisticRegression(BinaryLogisticRegression):
    # stochastic gradient calculation
    def _get_gradient(self,X,y):

        # grab a subset of samples in a mini-batch
        # and calculate the gradient according to the small batch only
        mini_batch_size = 16
        idxs = np.random.choice(len(y), mini_batch_size)

        ydiff = y[idxs]-self.predict_proba(X[idxs],add_bias=False).ravel() # get y difference (now scalar)
        gradient = np.mean(X[idxs] * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through

        gradient = gradient.reshape(self.w_.shape)
        #gradient[1:] += -2 * self.w_[1:] * self.C2

        return gradient


slr = StochasticLogisticRegression(eta=0.01, iterations=300,
                                   C1=0.001,
                                   C2=0.001) # take a lot more steps!!

slr.fit(X_train,y_train,regularization=None)

yhat = slr.predict(X_test)

print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.10899653979238755
In [20]:
class VectorBinaryLogisticRegression(BinaryLogisticRegression):
    # inherit from our previous class to get same functionality
    @staticmethod
    def _sigmoid(theta):
        # increase stability, redefine sigmoid operation
        return expit(theta) #1/(1+np.exp(-theta))

    # but overwrite the gradient calculation
    def _get_gradient(self,X,y):
        ydiff = y-self.predict_proba(X,add_bias=False).ravel() # get y difference
        gradient = np.mean(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through

        return gradient.reshape(self.w_.shape)

    def _get_gradient_L1(self, X, y):
        gradient = self._get_gradient(X, y)
        l1_der = self.w_[1:] / np.abs(self.w_[1:])
        l1_der[self.w_[1:] == 0] = 0
        gradient[1:] +=  -1 * l1_der * self.C1
        return gradient

    def _get_gradient_L2(self, X, y):
        gradient = self._get_gradient(X, y)
        gradient[1:] += -2 * self.w_[1:] * self.C2
        return gradient



vlr = VectorBinaryLogisticRegression(eta=0.01, iterations=300,
                                     C1=0.001,
                                     C2=0.001) # take a lot more steps!!

vlr.fit(X_train,y_train,regularization=None)

yhat = vlr.predict(X_test)

print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.11072664359861592
In [21]:
blr = BinaryLogisticRegression(eta = 0.001, iterations=100)
blr.fit(X_train, y_train)
y_pred = blr.predict(X_test)

from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
Accuracy: 0.10207612456747404
In [22]:
%%time
# for this, we won't perform our own BFGS implementation
# (it takes a fair amount of code and understanding, which we haven't setup yet)
# luckily for us, scipy has its own BFGS implementation:
from scipy.optimize import fmin_bfgs # maybe the most common bfgs algorithm in the world
from numpy import ma
class BFGSBinaryLogisticRegression(BinaryLogisticRegression):

    @staticmethod
    def objective_function(w,X,y,C1,C2):
        g = expit(X @ w)
        # invert this because scipy minimizes, but we derived all formulas for maximzing
        return -np.sum(ma.log(g[y==1]))-np.sum(ma.log(1-g[y==0])) + C2*sum(w**2) + C1*sum(np.abs(w))
        #-np.sum(y*np.log(g)+(1-y)*np.log(1-g))

    @staticmethod
    def objective_gradient(w,X,y,C1, C2):
        g = expit(X @ w)
        ydiff = y-g
        gradient = np.mean(X * ydiff[:,np.newaxis], axis=0)
        gradient = gradient.reshape(w.shape)
        gradient[1:] += -2 * w[1:] * C2
        l1_der = w[1:] / np.abs(w[1:])
        l1_der[w[1:] == 0] = 0
        gradient[1:] +=  -1 * l1_der * C1

        return -gradient

    # just overwrite fit function
    def fit(self, X, y, regularization=None):
        Xb = self._add_bias(X) # add bias term
        num_samples, num_features = Xb.shape
        if(regularization == 'L1'):
            self.C2 = 0
        elif(regularization == 'L2'):
            self.C1 = 0
        elif(regularization == 'elastic'):
            pass
        else:
            self.C1 = 0
            self.C2 = 0

        self.w_ = fmin_bfgs(self.objective_function, # what to optimize
                            np.zeros((num_features,1)), # starting point
                            fprime=self.objective_gradient, # gradient function
                            args=(Xb,y,self.C1,self.C2), # extra args for gradient and objective function
                            gtol=1e-03, # stopping criteria for gradient, |v_k|
                            maxiter=self.iters, # stopping criteria iterations
                            disp=False)

        self.w_ = self.w_.reshape((num_features,1))

bfgslr = BFGSBinaryLogisticRegression(_,iterations=3,C1=1.000,C2=1.000) # note that we need only a few iterations here

bfgslr.fit(X_train,y_train,regularization=None)
yhat = bfgslr.predict(X_test)
#print(bfgslr)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.10034602076124567
CPU times: user 123 ms, sys: 198 ms, total: 321 ms
Wall time: 50.1 ms
In [23]:
bfgslr = BFGSBinaryLogisticRegression(_,iterations=3,C1=1.000,C2=1.000)
bfgslr.fit(X_train,y_train,regularization='L1')
yhat = bfgslr.predict(X_test)
#print(bfgslr)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.10034602076124567
In [24]:
bfgslr = BFGSBinaryLogisticRegression(_,iterations=3,C1=1.000,C2=1.000)
bfgslr.fit(X_train,y_train,regularization='L2')
yhat = bfgslr.predict(X_test)
#print(bfgslr)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.10034602076124567
In [25]:
bfgslr = BFGSBinaryLogisticRegression(_,iterations=3,C1=1.000,C2=1.000)
bfgslr.fit(X_train,y_train,regularization='elastic')
yhat = bfgslr.predict(X_test)
#print(bfgslr)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.11245674740484429
In [26]:
%%time
from numpy.linalg import pinv
class HessianBinaryLogisticRegression(BinaryLogisticRegression):
    # just overwrite gradient function
    def _get_gradient(self,X,y):
        g = self.predict_proba(X,add_bias=False).ravel() # get sigmoid value for all classes
        hessian = X.T @ np.diag(g*(1-g)) @ X - 2 * self.C2 # calculate the hessian

        ydiff = y-g # get y difference
        gradient = np.sum(X * ydiff[:,np.newaxis], axis=0) # make ydiff a column vector and multiply through
        gradient = gradient.reshape(self.w_.shape)
        gradient[1:] += -2 * self.w_[1:] * self.C2

        return pinv(hessian) @ gradient

hlr = HessianBinaryLogisticRegression(eta=1.0,
                                      iterations=4,
                                      C1=0.001,
                                      C2=0.001) # note that we need only a few iterations here

hlr.fit(X_train,y_train, regularization='elastic')
yhat = hlr.predict(X_test)
#print(hlr)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.09169550173010381
CPU times: user 824 ms, sys: 476 ms, total: 1.3 s
Wall time: 178 ms
In [27]:
# allow for the user to specify the algorithm they want to solver the binary case
class MultiClassLogisticRegression:
    def __init__(self, eta, iterations=20,
                 C1=0.0001,
                 C2=0.0001,
                 solver=BFGSBinaryLogisticRegression):
        self.eta = eta
        self.iters = iterations
        self.C1 = C1
        self.C2 = C2
        self.solver = solver
        self.classifiers_ = []
        # internally we will store the weights as self.w_ to keep with sklearn conventions

    def __str__(self):
        if(hasattr(self,'w_')):
            return 'MultiClass Logistic Regression Object with coefficients:\n'+ str(self.w_) # is we have trained the object
        else:
            return 'Untrained MultiClass Logistic Regression Object'

    def fit(self,X,y, regularization=None):
        num_samples, num_features = X.shape
        self.unique_ = np.sort(np.unique(y)) # get each unique class value
        num_unique_classes = len(self.unique_)
        self.classifiers_ = []
        for i,yval in enumerate(self.unique_): # for each unique value
            y_binary = np.array(y==yval).astype(int) # create a binary problem
            # train the binary classifier for this class

            hblr = self.solver(eta=self.eta,iterations=self.iters,
                               C1=self.C1,
                               C2=self.C2
                               )
            hblr.fit(X,y_binary, regularization=regularization)

            # add the trained classifier to the list
            self.classifiers_.append(hblr)

        # save all the weights into one matrix, separate column for each class
        self.w_ = np.hstack([x.w_ for x in self.classifiers_]).T

    def predict_proba(self,X):
        probs = []
        for hblr in self.classifiers_:
            probs.append(hblr.predict_proba(X).reshape((len(X),1))) # get probability for each classifier

        return np.hstack(probs) # make into single matrix

    def predict(self,X):
        return self.unique_[np.argmax(self.predict_proba(X),axis=1)] # take argmax along row
In [28]:
%%time
lr = MultiClassLogisticRegression(eta=1.0,
                                  iterations=4,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=BFGSBinaryLogisticRegression
                                 )
lr.fit(X_train,y_train)
#print(lr)

yhat = lr.predict(X_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.870242214532872
CPU times: user 109 ms, sys: 157 ms, total: 265 ms
Wall time: 51.9 ms
In [29]:
%%time
lr = MultiClassLogisticRegression(eta=1.0,
                                  iterations=4,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=HessianBinaryLogisticRegression
                                 )
lr.fit(X_train,y_train,regularization=None)
#print(lr)

yhat = lr.predict(X_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.9204152249134948
CPU times: user 4.13 s, sys: 2.53 s, total: 6.66 s
Wall time: 917 ms
In [30]:
%%time
lr = MultiClassLogisticRegression(eta=0.1,
                                  iterations=100,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=StochasticLogisticRegression
                                 )
lr.fit(X_train,y_train,regularization="L1")
#print(lr)

yhat = lr.predict(X_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.8823529411764706
CPU times: user 64.1 ms, sys: 9.8 ms, total: 73.9 ms
Wall time: 66.8 ms
In [31]:
%%time
lr = MultiClassLogisticRegression(eta=0.1,
                                  iterations=4,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=VectorBinaryLogisticRegression
                                 )
lr.fit(X_train,y_train,regularization=None)
#print(lr)

yhat = lr.predict(X_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.8806228373702422
CPU times: user 50.9 ms, sys: 85.9 ms, total: 137 ms
Wall time: 32.6 ms
In [32]:
param = {
    'iterations': [10, 100, 1000],
    'C1': [0.1, 0.01, 0.001, 0.0001],
    'C2': [0.1, 0.01, 0.001, 0.0001],
    'regularization': [None, "L1", "L2", "elastic"],
    'solver': [
        ("Quasi-Newton", BFGSBinaryLogisticRegression), # Quasi-Newton's method
        ("SGD", StochasticLogisticRegression),
        ("Gradient", VectorBinaryLogisticRegression) # Gradient Descent
        ],
    }

accuracies = []
param_accuracies = []

for (sn, s) in param['solver']:
  for c1 in param['C1']:
    for c2 in param['C2']:
      for r in param['regularization']:
        for it in param['iterations']:
          #%%time
          params = f"solver: {sn} Regularization: {r} C1: {c1} C2: {c2} iters: {it}"
          print(params)
          lr = MultiClassLogisticRegression(eta=0.1,
                                            iterations=it,
                                            C1=c1,
                                            C2=c2,
                                            solver=s
                                          )
          lr.fit(X_train,y_train)
          #print(lr)

          yhat = lr.predict(X_test)
          #print('Accuracy of: ',accuracy_score(y_test,yhat))

          acc = accuracy_score(y_test, yhat)
          accuracies.append(acc)
          param_accuracies.append(params)
          print('Accuracy: ', acc, params)

accuracies = np.array(accuracies)
param_accuracies = np.array(param_accuracies)
ind = np.argpartition(accuracies, -5)[-5:]

top5acc = accuracies[ind]
top5params = param_accuracies[ind]
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1000
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1000
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.903114186851211 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 100
solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.9359861591695502 solver: Quasi-Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1000
solver: SGD Regularization: None C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.7439446366782007 solver: SGD Regularization: None C1: 0.1 C2: 0.1 iters: 10
solver: SGD Regularization: None C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.8719723183391004 solver: SGD Regularization: None C1: 0.1 C2: 0.1 iters: 100
solver: SGD Regularization: None C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.8944636678200693 solver: SGD Regularization: None C1: 0.1 C2: 0.1 iters: 1000
solver: SGD Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.7975778546712803 solver: SGD Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
solver: SGD Regularization: L1 C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.8771626297577855 solver: SGD Regularization: L1 C1: 0.1 C2: 0.1 iters: 100
solver: SGD Regularization: L1 C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: L1 C1: 0.1 C2: 0.1 iters: 1000
solver: SGD Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.7370242214532872 solver: SGD Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
solver: SGD Regularization: L2 C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.8581314878892734 solver: SGD Regularization: L2 C1: 0.1 C2: 0.1 iters: 100
solver: SGD Regularization: L2 C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.8944636678200693 solver: SGD Regularization: L2 C1: 0.1 C2: 0.1 iters: 1000
solver: SGD Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.745674740484429 solver: SGD Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
solver: SGD Regularization: elastic C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: SGD Regularization: elastic C1: 0.1 C2: 0.1 iters: 100
solver: SGD Regularization: elastic C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: elastic C1: 0.1 C2: 0.1 iters: 1000
solver: SGD Regularization: None C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.7785467128027682 solver: SGD Regularization: None C1: 0.1 C2: 0.01 iters: 10
solver: SGD Regularization: None C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.8719723183391004 solver: SGD Regularization: None C1: 0.1 C2: 0.01 iters: 100
solver: SGD Regularization: None C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8996539792387543 solver: SGD Regularization: None C1: 0.1 C2: 0.01 iters: 1000
solver: SGD Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.7698961937716263 solver: SGD Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
solver: SGD Regularization: L1 C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.8598615916955017 solver: SGD Regularization: L1 C1: 0.1 C2: 0.01 iters: 100
solver: SGD Regularization: L1 C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8961937716262975 solver: SGD Regularization: L1 C1: 0.1 C2: 0.01 iters: 1000
solver: SGD Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.7889273356401384 solver: SGD Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
solver: SGD Regularization: L2 C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: L2 C1: 0.1 C2: 0.01 iters: 100
solver: SGD Regularization: L2 C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8823529411764706 solver: SGD Regularization: L2 C1: 0.1 C2: 0.01 iters: 1000
solver: SGD Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.7422145328719724 solver: SGD Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
solver: SGD Regularization: elastic C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.8494809688581315 solver: SGD Regularization: elastic C1: 0.1 C2: 0.01 iters: 100
solver: SGD Regularization: elastic C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: elastic C1: 0.1 C2: 0.01 iters: 1000
solver: SGD Regularization: None C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.7820069204152249 solver: SGD Regularization: None C1: 0.1 C2: 0.001 iters: 10
solver: SGD Regularization: None C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.8650519031141869 solver: SGD Regularization: None C1: 0.1 C2: 0.001 iters: 100
solver: SGD Regularization: None C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: None C1: 0.1 C2: 0.001 iters: 1000
solver: SGD Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.7370242214532872 solver: SGD Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
solver: SGD Regularization: L1 C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.856401384083045 solver: SGD Regularization: L1 C1: 0.1 C2: 0.001 iters: 100
solver: SGD Regularization: L1 C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.9013840830449827 solver: SGD Regularization: L1 C1: 0.1 C2: 0.001 iters: 1000
solver: SGD Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.759515570934256 solver: SGD Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
solver: SGD Regularization: L2 C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.8633217993079585 solver: SGD Regularization: L2 C1: 0.1 C2: 0.001 iters: 100
solver: SGD Regularization: L2 C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.8927335640138409 solver: SGD Regularization: L2 C1: 0.1 C2: 0.001 iters: 1000
solver: SGD Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.773356401384083 solver: SGD Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
solver: SGD Regularization: elastic C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.8598615916955017 solver: SGD Regularization: elastic C1: 0.1 C2: 0.001 iters: 100
solver: SGD Regularization: elastic C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.8910034602076125 solver: SGD Regularization: elastic C1: 0.1 C2: 0.001 iters: 1000
solver: SGD Regularization: None C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.7647058823529411 solver: SGD Regularization: None C1: 0.1 C2: 0.0001 iters: 10
solver: SGD Regularization: None C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.8512110726643599 solver: SGD Regularization: None C1: 0.1 C2: 0.0001 iters: 100
solver: SGD Regularization: None C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.870242214532872 solver: SGD Regularization: None C1: 0.1 C2: 0.0001 iters: 1000
solver: SGD Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.7750865051903114 solver: SGD Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
solver: SGD Regularization: L1 C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: L1 C1: 0.1 C2: 0.0001 iters: 100
solver: SGD Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8806228373702422 solver: SGD Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1000
solver: SGD Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.7525951557093425 solver: SGD Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
solver: SGD Regularization: L2 C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.8442906574394463 solver: SGD Regularization: L2 C1: 0.1 C2: 0.0001 iters: 100
solver: SGD Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8961937716262975 solver: SGD Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1000
solver: SGD Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.7647058823529411 solver: SGD Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
solver: SGD Regularization: elastic C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: elastic C1: 0.1 C2: 0.0001 iters: 100
solver: SGD Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8771626297577855 solver: SGD Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1000
solver: SGD Regularization: None C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.7560553633217993 solver: SGD Regularization: None C1: 0.01 C2: 0.1 iters: 10
solver: SGD Regularization: None C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.8633217993079585 solver: SGD Regularization: None C1: 0.01 C2: 0.1 iters: 100
solver: SGD Regularization: None C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8927335640138409 solver: SGD Regularization: None C1: 0.01 C2: 0.1 iters: 1000
solver: SGD Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.7698961937716263 solver: SGD Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
solver: SGD Regularization: L1 C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.8771626297577855 solver: SGD Regularization: L1 C1: 0.01 C2: 0.1 iters: 100
solver: SGD Regularization: L1 C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: L1 C1: 0.01 C2: 0.1 iters: 1000
solver: SGD Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.745674740484429 solver: SGD Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
solver: SGD Regularization: L2 C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.8667820069204152 solver: SGD Regularization: L2 C1: 0.01 C2: 0.1 iters: 100
solver: SGD Regularization: L2 C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8927335640138409 solver: SGD Regularization: L2 C1: 0.01 C2: 0.1 iters: 1000
solver: SGD Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.7820069204152249 solver: SGD Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
solver: SGD Regularization: elastic C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.8598615916955017 solver: SGD Regularization: elastic C1: 0.01 C2: 0.1 iters: 100
solver: SGD Regularization: elastic C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8788927335640139 solver: SGD Regularization: elastic C1: 0.01 C2: 0.1 iters: 1000
solver: SGD Regularization: None C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.7785467128027682 solver: SGD Regularization: None C1: 0.01 C2: 0.01 iters: 10
solver: SGD Regularization: None C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.8650519031141869 solver: SGD Regularization: None C1: 0.01 C2: 0.01 iters: 100
solver: SGD Regularization: None C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8788927335640139 solver: SGD Regularization: None C1: 0.01 C2: 0.01 iters: 1000
solver: SGD Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.759515570934256 solver: SGD Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
solver: SGD Regularization: L1 C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: SGD Regularization: L1 C1: 0.01 C2: 0.01 iters: 100
solver: SGD Regularization: L1 C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8858131487889274 solver: SGD Regularization: L1 C1: 0.01 C2: 0.01 iters: 1000
solver: SGD Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.745674740484429 solver: SGD Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
solver: SGD Regularization: L2 C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.8581314878892734 solver: SGD Regularization: L2 C1: 0.01 C2: 0.01 iters: 100
solver: SGD Regularization: L2 C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8910034602076125 solver: SGD Regularization: L2 C1: 0.01 C2: 0.01 iters: 1000
solver: SGD Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.7577854671280276 solver: SGD Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
solver: SGD Regularization: elastic C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.8737024221453287 solver: SGD Regularization: elastic C1: 0.01 C2: 0.01 iters: 100
solver: SGD Regularization: elastic C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8754325259515571 solver: SGD Regularization: elastic C1: 0.01 C2: 0.01 iters: 1000
solver: SGD Regularization: None C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.71280276816609 solver: SGD Regularization: None C1: 0.01 C2: 0.001 iters: 10
solver: SGD Regularization: None C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.8719723183391004 solver: SGD Regularization: None C1: 0.01 C2: 0.001 iters: 100
solver: SGD Regularization: None C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8823529411764706 solver: SGD Regularization: None C1: 0.01 C2: 0.001 iters: 1000
solver: SGD Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.7577854671280276 solver: SGD Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
solver: SGD Regularization: L1 C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.8685121107266436 solver: SGD Regularization: L1 C1: 0.01 C2: 0.001 iters: 100
solver: SGD Regularization: L1 C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: L1 C1: 0.01 C2: 0.001 iters: 1000
solver: SGD Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.7716262975778547 solver: SGD Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
solver: SGD Regularization: L2 C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.8546712802768166 solver: SGD Regularization: L2 C1: 0.01 C2: 0.001 iters: 100
solver: SGD Regularization: L2 C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8788927335640139 solver: SGD Regularization: L2 C1: 0.01 C2: 0.001 iters: 1000
solver: SGD Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.7162629757785467 solver: SGD Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
solver: SGD Regularization: elastic C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.8685121107266436 solver: SGD Regularization: elastic C1: 0.01 C2: 0.001 iters: 100
solver: SGD Regularization: elastic C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8858131487889274 solver: SGD Regularization: elastic C1: 0.01 C2: 0.001 iters: 1000
solver: SGD Regularization: None C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.7802768166089965 solver: SGD Regularization: None C1: 0.01 C2: 0.0001 iters: 10
solver: SGD Regularization: None C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: None C1: 0.01 C2: 0.0001 iters: 100
solver: SGD Regularization: None C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8875432525951558 solver: SGD Regularization: None C1: 0.01 C2: 0.0001 iters: 1000
solver: SGD Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.7716262975778547 solver: SGD Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
solver: SGD Regularization: L1 C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: SGD Regularization: L1 C1: 0.01 C2: 0.0001 iters: 100
solver: SGD Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8858131487889274 solver: SGD Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000
solver: SGD Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.7923875432525952 solver: SGD Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
solver: SGD Regularization: L2 C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.8581314878892734 solver: SGD Regularization: L2 C1: 0.01 C2: 0.0001 iters: 100
solver: SGD Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8806228373702422 solver: SGD Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1000
solver: SGD Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.7889273356401384 solver: SGD Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
solver: SGD Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.8650519031141869 solver: SGD Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100
solver: SGD Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8754325259515571 solver: SGD Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000
solver: SGD Regularization: None C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.7768166089965398 solver: SGD Regularization: None C1: 0.001 C2: 0.1 iters: 10
solver: SGD Regularization: None C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.8667820069204152 solver: SGD Regularization: None C1: 0.001 C2: 0.1 iters: 100
solver: SGD Regularization: None C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8771626297577855 solver: SGD Regularization: None C1: 0.001 C2: 0.1 iters: 1000
solver: SGD Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.7629757785467128 solver: SGD Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
solver: SGD Regularization: L1 C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: SGD Regularization: L1 C1: 0.001 C2: 0.1 iters: 100
solver: SGD Regularization: L1 C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: L1 C1: 0.001 C2: 0.1 iters: 1000
solver: SGD Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.7629757785467128 solver: SGD Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
solver: SGD Regularization: L2 C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.8685121107266436 solver: SGD Regularization: L2 C1: 0.001 C2: 0.1 iters: 100
solver: SGD Regularization: L2 C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8875432525951558 solver: SGD Regularization: L2 C1: 0.001 C2: 0.1 iters: 1000
solver: SGD Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.7785467128027682 solver: SGD Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
solver: SGD Regularization: elastic C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.8546712802768166 solver: SGD Regularization: elastic C1: 0.001 C2: 0.1 iters: 100
solver: SGD Regularization: elastic C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8961937716262975 solver: SGD Regularization: elastic C1: 0.001 C2: 0.1 iters: 1000
solver: SGD Regularization: None C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.745674740484429 solver: SGD Regularization: None C1: 0.001 C2: 0.01 iters: 10
solver: SGD Regularization: None C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.856401384083045 solver: SGD Regularization: None C1: 0.001 C2: 0.01 iters: 100
solver: SGD Regularization: None C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.8961937716262975 solver: SGD Regularization: None C1: 0.001 C2: 0.01 iters: 1000
solver: SGD Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.7612456747404844 solver: SGD Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
solver: SGD Regularization: L1 C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.8719723183391004 solver: SGD Regularization: L1 C1: 0.001 C2: 0.01 iters: 100
solver: SGD Regularization: L1 C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: L1 C1: 0.001 C2: 0.01 iters: 1000
solver: SGD Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.7854671280276817 solver: SGD Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
solver: SGD Regularization: L2 C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.8667820069204152 solver: SGD Regularization: L2 C1: 0.001 C2: 0.01 iters: 100
solver: SGD Regularization: L2 C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.8875432525951558 solver: SGD Regularization: L2 C1: 0.001 C2: 0.01 iters: 1000
solver: SGD Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.7820069204152249 solver: SGD Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
solver: SGD Regularization: elastic C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.8546712802768166 solver: SGD Regularization: elastic C1: 0.001 C2: 0.01 iters: 100
solver: SGD Regularization: elastic C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: elastic C1: 0.001 C2: 0.01 iters: 1000
solver: SGD Regularization: None C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.7854671280276817 solver: SGD Regularization: None C1: 0.001 C2: 0.001 iters: 10
solver: SGD Regularization: None C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: None C1: 0.001 C2: 0.001 iters: 100
solver: SGD Regularization: None C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: None C1: 0.001 C2: 0.001 iters: 1000
solver: SGD Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.7370242214532872 solver: SGD Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
solver: SGD Regularization: L1 C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.8667820069204152 solver: SGD Regularization: L1 C1: 0.001 C2: 0.001 iters: 100
solver: SGD Regularization: L1 C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: L1 C1: 0.001 C2: 0.001 iters: 1000
solver: SGD Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.7664359861591695 solver: SGD Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
solver: SGD Regularization: L2 C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.8667820069204152 solver: SGD Regularization: L2 C1: 0.001 C2: 0.001 iters: 100
solver: SGD Regularization: L2 C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: L2 C1: 0.001 C2: 0.001 iters: 1000
solver: SGD Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.7802768166089965 solver: SGD Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
solver: SGD Regularization: elastic C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: elastic C1: 0.001 C2: 0.001 iters: 100
solver: SGD Regularization: elastic C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.8823529411764706 solver: SGD Regularization: elastic C1: 0.001 C2: 0.001 iters: 1000
solver: SGD Regularization: None C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.7577854671280276 solver: SGD Regularization: None C1: 0.001 C2: 0.0001 iters: 10
solver: SGD Regularization: None C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.8650519031141869 solver: SGD Regularization: None C1: 0.001 C2: 0.0001 iters: 100
solver: SGD Regularization: None C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8823529411764706 solver: SGD Regularization: None C1: 0.001 C2: 0.0001 iters: 1000
solver: SGD Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.7577854671280276 solver: SGD Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
solver: SGD Regularization: L1 C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.8650519031141869 solver: SGD Regularization: L1 C1: 0.001 C2: 0.0001 iters: 100
solver: SGD Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1000
solver: SGD Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.7525951557093425 solver: SGD Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
solver: SGD Regularization: L2 C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.8633217993079585 solver: SGD Regularization: L2 C1: 0.001 C2: 0.0001 iters: 100
solver: SGD Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8806228373702422 solver: SGD Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1000
solver: SGD Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.7923875432525952 solver: SGD Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
solver: SGD Regularization: elastic C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.8771626297577855 solver: SGD Regularization: elastic C1: 0.001 C2: 0.0001 iters: 100
solver: SGD Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8910034602076125 solver: SGD Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1000
solver: SGD Regularization: None C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.7577854671280276 solver: SGD Regularization: None C1: 0.0001 C2: 0.1 iters: 10
solver: SGD Regularization: None C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.8529411764705882 solver: SGD Regularization: None C1: 0.0001 C2: 0.1 iters: 100
solver: SGD Regularization: None C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8788927335640139 solver: SGD Regularization: None C1: 0.0001 C2: 0.1 iters: 1000
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.7629757785467128 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.8442906574394463 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.1 iters: 100
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.9013840830449827 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1000
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.7664359861591695 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.8650519031141869 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.1 iters: 100
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8910034602076125 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1000
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.7387543252595156 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.8719723183391004 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.1 iters: 100
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8875432525951558 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1000
solver: SGD Regularization: None C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.7768166089965398 solver: SGD Regularization: None C1: 0.0001 C2: 0.01 iters: 10
solver: SGD Regularization: None C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.8512110726643599 solver: SGD Regularization: None C1: 0.0001 C2: 0.01 iters: 100
solver: SGD Regularization: None C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8858131487889274 solver: SGD Regularization: None C1: 0.0001 C2: 0.01 iters: 1000
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.7820069204152249 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.8771626297577855 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.01 iters: 100
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.903114186851211 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1000
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.7785467128027682 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.8685121107266436 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8944636678200693 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1000
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.7647058823529411 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.8581314878892734 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.01 iters: 100
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8806228373702422 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1000
solver: SGD Regularization: None C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.7577854671280276 solver: SGD Regularization: None C1: 0.0001 C2: 0.001 iters: 10
solver: SGD Regularization: None C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: None C1: 0.0001 C2: 0.001 iters: 100
solver: SGD Regularization: None C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8875432525951558 solver: SGD Regularization: None C1: 0.0001 C2: 0.001 iters: 1000
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.7560553633217993 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.8685121107266436 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.001 iters: 100
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8823529411764706 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1000
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.7629757785467128 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.8615916955017301 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.001 iters: 100
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8823529411764706 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1000
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.7854671280276817 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.8771626297577855 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.001 iters: 100
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8944636678200693 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1000
solver: SGD Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.7629757785467128 solver: SGD Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
solver: SGD Regularization: None C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.8442906574394463 solver: SGD Regularization: None C1: 0.0001 C2: 0.0001 iters: 100
solver: SGD Regularization: None C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.8944636678200693 solver: SGD Regularization: None C1: 0.0001 C2: 0.0001 iters: 1000
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.773356401384083 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.8529411764705882 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 100
solver: SGD Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.884083044982699 solver: SGD Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1000
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.7197231833910035 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.8685121107266436 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 100
solver: SGD Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.889273356401384 solver: SGD Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1000
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.773356401384083 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.8598615916955017 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 100
solver: SGD Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.8961937716262975 solver: SGD Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: None C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.1 C2: 0.1 iters: 10
solver: Gradient Regularization: None C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.1 C2: 0.1 iters: 100
solver: Gradient Regularization: None C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.1 C2: 0.1 iters: 1000
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.1 iters: 100
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.1 iters: 1000
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.1 iters: 100
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.1 iters: 1000
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.1 iters: 100
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.1 iters: 1000
solver: Gradient Regularization: None C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.1 C2: 0.01 iters: 10
solver: Gradient Regularization: None C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.1 C2: 0.01 iters: 100
solver: Gradient Regularization: None C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.1 C2: 0.01 iters: 1000
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.01 iters: 100
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.01 iters: 1000
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.01 iters: 100
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.01 iters: 1000
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.01 iters: 100
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.01 iters: 1000
solver: Gradient Regularization: None C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.1 C2: 0.001 iters: 10
solver: Gradient Regularization: None C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.1 C2: 0.001 iters: 100
solver: Gradient Regularization: None C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.1 C2: 0.001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.001 iters: 100
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.001 iters: 100
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.001 iters: 100
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.001 iters: 1000
solver: Gradient Regularization: None C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.1 C2: 0.0001 iters: 10
solver: Gradient Regularization: None C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.1 C2: 0.0001 iters: 100
solver: Gradient Regularization: None C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.1 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.0001 iters: 100
solver: Gradient Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.0001 iters: 100
solver: Gradient Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.0001 iters: 100
solver: Gradient Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1000
solver: Gradient Regularization: None C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.01 C2: 0.1 iters: 10
solver: Gradient Regularization: None C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.01 C2: 0.1 iters: 100
solver: Gradient Regularization: None C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.01 C2: 0.1 iters: 1000
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.1 iters: 100
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.1 iters: 1000
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.1 iters: 100
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.1 iters: 1000
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.1 iters: 100
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.1 iters: 1000
solver: Gradient Regularization: None C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.01 C2: 0.01 iters: 10
solver: Gradient Regularization: None C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.01 C2: 0.01 iters: 100
solver: Gradient Regularization: None C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.01 C2: 0.01 iters: 1000
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.01 iters: 100
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.01 iters: 1000
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.01 iters: 100
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.01 iters: 1000
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.01 iters: 100
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.01 iters: 1000
solver: Gradient Regularization: None C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.01 C2: 0.001 iters: 10
solver: Gradient Regularization: None C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.01 C2: 0.001 iters: 100
solver: Gradient Regularization: None C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.01 C2: 0.001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.001 iters: 100
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.001 iters: 100
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.001 iters: 100
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.001 iters: 1000
solver: Gradient Regularization: None C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.01 C2: 0.0001 iters: 10
solver: Gradient Regularization: None C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.01 C2: 0.0001 iters: 100
solver: Gradient Regularization: None C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.01 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.0001 iters: 100
solver: Gradient Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.0001 iters: 100
solver: Gradient Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100
solver: Gradient Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000
solver: Gradient Regularization: None C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.001 C2: 0.1 iters: 10
solver: Gradient Regularization: None C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.001 C2: 0.1 iters: 100
solver: Gradient Regularization: None C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.001 C2: 0.1 iters: 1000
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.1 iters: 100
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.1 iters: 1000
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.1 iters: 100
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.1 iters: 1000
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.1 iters: 100
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.1 iters: 1000
solver: Gradient Regularization: None C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.001 C2: 0.01 iters: 10
solver: Gradient Regularization: None C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.001 C2: 0.01 iters: 100
solver: Gradient Regularization: None C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.001 C2: 0.01 iters: 1000
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.01 iters: 100
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.01 iters: 1000
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.01 iters: 100
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.01 iters: 1000
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.01 iters: 100
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.01 iters: 1000
solver: Gradient Regularization: None C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.001 C2: 0.001 iters: 10
solver: Gradient Regularization: None C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.001 C2: 0.001 iters: 100
solver: Gradient Regularization: None C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.001 C2: 0.001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.001 iters: 100
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.001 iters: 100
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.001 iters: 100
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.001 iters: 1000
solver: Gradient Regularization: None C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.001 C2: 0.0001 iters: 10
solver: Gradient Regularization: None C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.001 C2: 0.0001 iters: 100
solver: Gradient Regularization: None C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.0001 iters: 100
solver: Gradient Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.0001 iters: 100
solver: Gradient Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.0001 iters: 100
solver: Gradient Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: None C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.0001 C2: 0.1 iters: 10
solver: Gradient Regularization: None C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.0001 C2: 0.1 iters: 100
solver: Gradient Regularization: None C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.0001 C2: 0.1 iters: 1000
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.1 iters: 100
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1000
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.1 iters: 100
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1000
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.1 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.1 iters: 100
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1000
solver: Gradient Regularization: None C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.0001 C2: 0.01 iters: 10
solver: Gradient Regularization: None C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.0001 C2: 0.01 iters: 100
solver: Gradient Regularization: None C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.0001 C2: 0.01 iters: 1000
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.01 iters: 100
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1000
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1000
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.01 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.01 iters: 100
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1000
solver: Gradient Regularization: None C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.0001 C2: 0.001 iters: 10
solver: Gradient Regularization: None C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.0001 C2: 0.001 iters: 100
solver: Gradient Regularization: None C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.0001 C2: 0.001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.001 iters: 100
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.001 iters: 100
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.001 iters: 100
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1000
solver: Gradient Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
solver: Gradient Regularization: None C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: None C1: 0.0001 C2: 0.0001 iters: 100
solver: Gradient Regularization: None C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: None C1: 0.0001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 100
solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 100
solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1000
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.884083044982699 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 100
Accuracy:  0.870242214532872 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 100
solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1000
Accuracy:  0.8979238754325259 solver: Gradient Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1000
In [33]:
print(top5acc)
print(top5params)
[0.93598616 0.93598616 0.93598616 0.93598616 0.93598616]
['solver: Quasi-Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 100'
 'solver: Quasi-Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1000'
 'solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1000'
 'solver: Quasi-Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 100'
 'solver: Quasi-Newton Regularization: None C1: 0.01 C2: 0.001 iters: 100']
  • The reason why I chose to generate a separate table was because when running Newton solver with 1 - 1000 iterations, I found that the model would overflow. For this reason I decided select a smaller range of iterations in order to ensure that I found valid accuracies. I chose to use a range of 1, 4, and 10 iterations.
In [34]:
param = {
    'iterations': [1, 4, 10],
    'C1': [0.1, 0.01, 0.001, 0.0001],
    'C2': [0.1, 0.01, 0.001, 0.0001],
    'regularization': [None, "L1", "L2", "elastic"],
    'solver': [
        ("Newton", HessianBinaryLogisticRegression), # Newton's method
        ],
    }

accuracies = []
param_accuracies = []

for (sn, s) in param['solver']:
  for c1 in param['C1']:
    for c2 in param['C2']:
      for r in param['regularization']:
        for it in param['iterations']:
          #%%time
          params = f"solver: {sn} Regularization: {r} C1: {c1} C2: {c2} iters: {it}"
          print(params)
          lr = MultiClassLogisticRegression(eta=0.1,
                                            iterations=it,
                                            C1=c1,
                                            C2=c2,
                                            solver=s
                                          )
          lr.fit(X_train,y_train)
          #print(lr)

          yhat = lr.predict(X_test)
          #print('Accuracy of: ',accuracy_score(y_test,yhat))

          acc = accuracy_score(y_test, yhat)
          accuracies.append(acc)
          param_accuracies.append(params)
          print('Accuracy: ', acc, params)

accuracies = np.array(accuracies)
param_accuracies = np.array(param_accuracies)
ind = np.argpartition(accuracies, -5)[-5:]

top5acc = accuracies[ind]
top5params = param_accuracies[ind]
solver: Newton Regularization: None C1: 0.1 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.1 C2: 0.1 iters: 1
solver: Newton Regularization: None C1: 0.1 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.1 C2: 0.1 iters: 4
solver: Newton Regularization: None C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.1 C2: 0.1 iters: 10
solver: Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 1
solver: Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 4
solver: Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.1 C2: 0.1 iters: 10
solver: Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 1
solver: Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 4
solver: Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.1 C2: 0.1 iters: 10
solver: Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 1
solver: Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 4
solver: Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.1 C2: 0.1 iters: 10
solver: Newton Regularization: None C1: 0.1 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.1 C2: 0.01 iters: 1
solver: Newton Regularization: None C1: 0.1 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.1 C2: 0.01 iters: 4
solver: Newton Regularization: None C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.1 C2: 0.01 iters: 10
solver: Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 1
solver: Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 4
solver: Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.1 C2: 0.01 iters: 10
solver: Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 1
solver: Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 4
solver: Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.1 C2: 0.01 iters: 10
solver: Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 1
solver: Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 4
solver: Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.1 C2: 0.01 iters: 10
solver: Newton Regularization: None C1: 0.1 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.1 C2: 0.001 iters: 1
solver: Newton Regularization: None C1: 0.1 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.1 C2: 0.001 iters: 4
solver: Newton Regularization: None C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.1 C2: 0.001 iters: 10
solver: Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 1
solver: Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 4
solver: Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.1 C2: 0.001 iters: 10
solver: Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 1
solver: Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 4
solver: Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.1 C2: 0.001 iters: 10
solver: Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 1
solver: Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 4
solver: Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.1 C2: 0.001 iters: 10
solver: Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 1
solver: Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 4
solver: Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.1 C2: 0.0001 iters: 10
solver: Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 1
solver: Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 4
solver: Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.1 C2: 0.0001 iters: 10
solver: Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 1
solver: Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 4
solver: Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.1 C2: 0.0001 iters: 10
solver: Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 1
solver: Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 4
solver: Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.1 C2: 0.0001 iters: 10
solver: Newton Regularization: None C1: 0.01 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.01 C2: 0.1 iters: 1
solver: Newton Regularization: None C1: 0.01 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.01 C2: 0.1 iters: 4
solver: Newton Regularization: None C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.01 C2: 0.1 iters: 10
solver: Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 1
solver: Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 4
solver: Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.01 C2: 0.1 iters: 10
solver: Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 1
solver: Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 4
solver: Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.01 C2: 0.1 iters: 10
solver: Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 1
solver: Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 4
solver: Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.01 C2: 0.1 iters: 10
solver: Newton Regularization: None C1: 0.01 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.01 C2: 0.01 iters: 1
solver: Newton Regularization: None C1: 0.01 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.01 C2: 0.01 iters: 4
solver: Newton Regularization: None C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.01 C2: 0.01 iters: 10
solver: Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 1
solver: Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 4
solver: Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.01 C2: 0.01 iters: 10
solver: Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 1
solver: Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 4
solver: Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.01 C2: 0.01 iters: 10
solver: Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 1
solver: Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 4
solver: Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.01 C2: 0.01 iters: 10
solver: Newton Regularization: None C1: 0.01 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.01 C2: 0.001 iters: 1
solver: Newton Regularization: None C1: 0.01 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.01 C2: 0.001 iters: 4
solver: Newton Regularization: None C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.01 C2: 0.001 iters: 10
solver: Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 1
solver: Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 4
solver: Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.01 C2: 0.001 iters: 10
solver: Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 1
solver: Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 4
solver: Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.01 C2: 0.001 iters: 10
solver: Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 1
solver: Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 4
solver: Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.01 C2: 0.001 iters: 10
solver: Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 1
solver: Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 4
solver: Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.01 C2: 0.0001 iters: 10
solver: Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 1
solver: Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 4
solver: Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.01 C2: 0.0001 iters: 10
solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 1
solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 4
solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10
solver: Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 1
solver: Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 4
solver: Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.01 C2: 0.0001 iters: 10
solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 1
solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 4
solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 10
solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 1
solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 4
solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 10
solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 1
solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 4
solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 10
solver: Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 1
solver: Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 4
solver: Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.001 C2: 0.1 iters: 10
solver: Newton Regularization: None C1: 0.001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.001 C2: 0.01 iters: 1
solver: Newton Regularization: None C1: 0.001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.001 C2: 0.01 iters: 4
solver: Newton Regularization: None C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.001 C2: 0.01 iters: 10
solver: Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 1
solver: Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 4
solver: Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.001 C2: 0.01 iters: 10
solver: Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 1
solver: Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 4
solver: Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.001 C2: 0.01 iters: 10
solver: Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 1
solver: Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 4
solver: Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.001 C2: 0.01 iters: 10
solver: Newton Regularization: None C1: 0.001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.001 C2: 0.001 iters: 1
solver: Newton Regularization: None C1: 0.001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.001 C2: 0.001 iters: 4
solver: Newton Regularization: None C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.001 C2: 0.001 iters: 10
solver: Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 1
solver: Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 4
solver: Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.001 C2: 0.001 iters: 10
solver: Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 1
solver: Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 4
solver: Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.001 C2: 0.001 iters: 10
solver: Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 1
solver: Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 4
solver: Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.001 C2: 0.001 iters: 10
solver: Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 1
solver: Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 4
solver: Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.001 C2: 0.0001 iters: 10
solver: Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 1
solver: Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 4
solver: Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.001 C2: 0.0001 iters: 10
solver: Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 1
solver: Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 4
solver: Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.001 C2: 0.0001 iters: 10
solver: Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 1
solver: Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 4
solver: Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.001 C2: 0.0001 iters: 10
solver: Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 1
solver: Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 4
solver: Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.0001 C2: 0.1 iters: 10
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 1
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 4
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.1 iters: 10
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 1
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 4
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.1 iters: 10
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 1
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 4
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.1 iters: 10
solver: Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 1
solver: Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 4
solver: Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.0001 C2: 0.01 iters: 10
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 1
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 4
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.01 iters: 10
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 1
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 4
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.01 iters: 10
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 1
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 4
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.01 iters: 10
solver: Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 1
solver: Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 4
solver: Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.0001 C2: 0.001 iters: 10
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 1
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 4
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.001 iters: 10
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 1
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 4
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.001 iters: 10
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 1
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 4
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.001 iters: 10
solver: Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 1
solver: Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 4
solver: Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: None C1: 0.0001 C2: 0.0001 iters: 10
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 1
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 4
solver: Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L1 C1: 0.0001 C2: 0.0001 iters: 10
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 1
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 4
solver: Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: L2 C1: 0.0001 C2: 0.0001 iters: 10
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1
Accuracy:  0.8719723183391004 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 1
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 4
Accuracy:  0.8737024221453287 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 4
solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
Accuracy:  0.8806228373702422 solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10
In [35]:
print(top5acc)
print(top5params)
[0.88062284 0.88062284 0.88062284 0.88062284 0.88062284]
['solver: Newton Regularization: L1 C1: 0.001 C2: 0.1 iters: 10'
 'solver: Newton Regularization: None C1: 0.001 C2: 0.1 iters: 10'
 'solver: Newton Regularization: L2 C1: 0.001 C2: 0.1 iters: 10'
 'solver: Newton Regularization: L2 C1: 0.01 C2: 0.0001 iters: 10'
 'solver: Newton Regularization: elastic C1: 0.0001 C2: 0.0001 iters: 10']
  • From the accuracies it is clear that the best combination that I found for parameters was using the Quasi Newton Solver. The accuracies for Quasi Newton were a lot higher than any other combination within my analysis. The accuracy that we found was 93.5% when testing with multiple combinations of iterations and different types of regularization. The results show that all of the top 5 combinations have the same accuracy and utilize the Quasi Newton method. For other methods that I analyzed, stochastic gradient and gradient descent had accuracies that do not pass 90%. This can be attributed to several factors, as this solver are sensitive to the choice of learning rate. Moreover, the methods are also susceptible to noise in the data making the convergence challenging. Newton's solver provided an accuracy that was suprisingly lower than than some combinations of stochastic methods. However, this can be attributed to the number of iterations as the ranges provided were from 1 - 10 , as inputting higher values of iterations would make the model overflow. The Quasi Newton Solver emerged as the best solver with accuracies that were higher than any of the other solvers. The results underscore the robustness and how effective the solver is when utilizing different combinations of parameters.
  • My method for selecting the best combination of parameters was outputting the accuarcy based on a range of the different values for each parameter. I believe that this is an adequate quantitative analysis that demonstrates what accuracies will be the best with a diverse range of the parameters. For the first three solvers that were analyzed, I collected 192 combinations for each solver which makes for a systematic search for the top five combiantions. I believe that this method is justified as we found our highest accuracy to be 93 percent, which is suprisingly very high.

Visualizing models with different iterations for Quasi Newton Solver¶

In [36]:
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import plotly
import plotly.io as pio

iter = [2, 4, 8, 16 ,32, 128, 256, 512, 1024]
iter_accur = []

for i in iter:
    lr = MultiClassLogisticRegression(eta=0.1,
                                  iterations= i,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=BFGSBinaryLogisticRegression
                                 )
    lr.fit(X_train,y_train)
    y_hat_new = lr.predict(X_test)
    iter_accur.append(accuracy_score(y_test, y_hat_new))

# using plotly
fig = go.Figure(data=go.Scatter(x=iter, y=iter_accur))
fig.update_layout(title_text='Quasi Newton - Accuracy Vs Number of Iterations Quasi Newton', xaxis_title="Number of Iterations", yaxis_title="Accuracy")
fig.show()


#print(lr)
  • The depicted graph reveals a noteworthy trend: after 128 iterations, the accuracy levels off, indicating a plateau in model improvement. Consequently, it can be inferred that the optimal number of iterations should be any value greater than or equal to 128. This observation aligns with my understanding of the iterative optimization process, where a new gradient is computed in each iteration, a process influenced by the learning rate.

  • In the specific context of my analysis, employing a learning rate of 0.1 results in the model converging to an accuracy of 93 percent at the 128th iteration. This convergence point signifies that, at this learning rate, the model reaches a stable and satisfactory accuracy level within 128 iterations, underlining the importance of an appropriate learning rate in the optimization process.

In [37]:
c1 = [.0001, .001, .01, .1, 1]
c2 = [.0001, .001, .01, .1, 1]
C_accur = []
for i, j in zip (c1, c2):
    lr = MultiClassLogisticRegression(eta=0.01,
                                  iterations=10,
                                  C1= i,
                                  C2= j ,
                                  solver=BFGSBinaryLogisticRegression
                                 )
    lr.fit(X_train,y_train, regularization = "elastic")
    y_hat_new = lr.predict(X_test)
    C_accur.append(accuracy_score(y_test, y_hat_new))


# using plotly
fig = go.Figure(data=go.Scatter(x=c1, y=C_accur))
fig.update_layout(title_text='Quasi Newton - Accuracy Vs Regularization Strength', xaxis_title="Regularization Strength (ElasticNet)", yaxis_title="Accuracy")
fig.show()
  • As depicted in the figure above, there is a clear correlation between the level of regularization strength and the resulting model accuracy. It's evident that as the regularization strength increases, the model's accuracy decreases. This effect becomes even more emphasized when the regularization strength is increased by a factor of ten, showcasing the substantial impact of regularization on accuracy. Regularization, an integral component of machine learning model training, serves the purpose of mitigating overfitting by imposing constraints on the magnitudes of the model's weight parameters. As the regularization strength is heightened, these constraints become more strict, causing a reduction in the magnitudes of the model's weights. This reduction can lower the model's accuracy. The graphical representation provides a visually intuitive confirmation of this relationship. In the absence of any regularization strength, which implies no regularization applied, the model attains its peak accuracy. In this unconstrained state, the model has the liberty to capture the details within the training data, resulting in optimal accuracy. However, as regularization strength is introduced and increased, this liberty is progressively restricted, leading to a decrease in model accuracy. This observation underscores the crucial trade-off between the strength of regularization and model accuracy. It highlights the importance of carefully selecting regularization parameters to strike the right balance between preventing overfitting and maintaining high accuracy.ng task

Visualizing Models and Comparing Accuracy and Runtime¶

  • In this analysis I will be utilizing the same parameters and see the impact that different solvers have on running time.
In [38]:
import time 
optimizations = [
        BFGSBinaryLogisticRegression, # Quasi-Newton's method
        StochasticLogisticRegression,
        VectorBinaryLogisticRegression,
        HessianBinaryLogisticRegression
        # Gradient Descent
        ]

accur= []
times = []
for i in optimizations:
    print(i)
    model = MultiClassLogisticRegression(eta=0.1,
                                  iterations= 10,
                                  C1=0,
                                  C2=0,
                                  solver= i
                                 )
    
    start = time.time()
    model.fit(X_train, y_train)
    end = time.time()
    y_hat_new = model.predict(X_test)
    accur.append(accuracy_score(y_test, y_hat_new))
    times.append(end-start)
    print(end-start)
<class '__main__.BFGSBinaryLogisticRegression'>
0.10863828659057617
<class '__main__.StochasticLogisticRegression'>
0.0032608509063720703
<class '__main__.VectorBinaryLogisticRegression'>
0.022936105728149414
<class '__main__.HessianBinaryLogisticRegression'>
2.4437451362609863
In [39]:
print(accur)
print(times)
[0.903114186851211, 0.7854671280276817, 0.884083044982699, 0.8806228373702422]
[0.10863828659057617, 0.0032608509063720703, 0.022936105728149414, 2.4437451362609863]
  • When comparing optimization methods it becomes clear that the choice of optimizer can greatly impact both accuracy and computational efficiency. The Quasi Newton method emerged as the performer in terms of accuracy achieving a 90.3% accuracy rate. However it does require time to compute taking 0.214 seconds to converge. This trade off may be acceptable, in situations where precision's of importance. On the hand Stochastic Gradient Descent (SGD) stood out as the optimizer completing the optimization process in just 0.009 seconds. Although it sacrifices some accuracy (80.4%) its computational efficiency makes it suitable for datasets or time sensitive applications where absolute precision's not the top priority. Gradient Descent offers a compromise between the Quasi Newton method and SGD with its level of accuracy (88%) and moderate computational time (approximately 0.096 seconds). This makes it a reliable choice for machine learning tasks. Lastly, Newton's update method, despite having the slowest runtime at 2.78 seconds, did not provide a substantial accuracy advantage over Gradient Descent. The extensive computational cost, stemming from the Hessian matrix calculations (O(N^3) operations), makes it less practical for most real-world applications. The 2.78-second runtime far exceeds the other optimization techniques, without delivering a noticeable boost in accuracy.ceable.uracy.

Comparing my best model to sklearn's logistic regression¶

In [40]:
%%time
lr = MultiClassLogisticRegression(eta=0.01,
                                  iterations=1000,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=BFGSBinaryLogisticRegression
                                 )
lr.fit(X_train,y_train)
#print(lr)

yhat = lr.predict(X_test)
print('Accuracy of: ',accuracy_score(y_test,yhat))
Accuracy of:  0.9359861591695502
CPU times: user 1.4 s, sys: 1.54 s, total: 2.94 s
Wall time: 424 ms
In [41]:
%%time
from sklearn.linear_model import LogisticRegression as SKLogisticRegression

sci_kit_model = SKLogisticRegression(solver='lbfgs', max_iter=1000, multi_class='ovr', penalty = "none")
sci_kit_model.fit(X_train, y_train)
print(accuracy_score(y_true=y_test, y_pred=sci_kit_model.predict(X_test)))
/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

0.9377162629757786
CPU times: user 5.78 s, sys: 5.89 s, total: 11.7 s
Wall time: 1.62 s
In [42]:
%%time
from sklearn.linear_model import LogisticRegression as SKLogisticRegression

sci_kit_model = SKLogisticRegression(solver='lbfgs', max_iter=1000, penalty = "none")
sci_kit_model.fit(X_train, y_train)
print(accuracy_score(y_true=y_test, y_pred=sci_kit_model.predict(X_test)))
/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

0.9653979238754326
CPU times: user 2.96 s, sys: 3.32 s, total: 6.27 s
Wall time: 863 ms
/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

Comparing Sklearn and Custom BFGS Implementation:

In my evaluation, I compared the Sklearn optimizer and our custom BFGS implementation. Here are the key findings:

  • Sklearn achieved an accuracy of 96.3%, 3 percent higher than our custom model's accuracy of 93.6%.
  • Our model took approximately 657 ms to run, while SKlearn ran in about 235 ms, significantly faster.

To evaluate which implementation would be more suitable, there would be a trade-off to take into account between time and accuracy. Or implementation presents an accuracy that is 3 percent lower than that of Sklearn. Moreover, the time it takes is double that of the Sklearn implementation. In terms of cost, I would suggest picking SKlearn's implementation as it has a higher accuracy, which would be more suitable for providing a viable model. I must also consider that Sklearn takes half of the time that it takes for our implementation. This is even more noteworthy if dealing with large datasets, making Sklearn a more favorable option in scenarios where efficiency is critical factor.

Differences Between Parameters:

In my experimentation, I implemented the standard BFGS optimization method, whereas Sklearn employs the lightweight BFGS variant. Sklearn's approach incorporates a convergence criterion based on a maximum of 100 iterations, allowing for early termination if accuracy plateaus. This divergence in methodology may account for the faster execution time observed in Sklearn compared to our custom implementation, as it effectively avoids redundant computations.

Comparing my best logistic regression optimization to SKlearn¶

In [43]:
import time
iter = [2, 4, 8, 16 ,32, 128, 256, 512, 1024]
iter_accur = []
sci_kit_accur = [] 
time_BFGS = [] 
time_SKLEARN = []


for i in iter:
    lr = MultiClassLogisticRegression(eta=0.1,
                                  iterations= i,
                                  C1=0.01,
                                  C2=0.01,
                                  solver=BFGSBinaryLogisticRegression
                                 )
    sci_kit_model = SKLogisticRegression(solver='lbfgs', max_iter=i, penalty = "none")
    start1 = time.time()
    sci_kit_model.fit(X_train, y_train)
    end1 = time.time()

    start2 =time.time()
    lr.fit(X_train,y_train)
    end2 = time.time()
    y_hat_new = lr.predict(X_test)
    y_hat_sk = sci_kit_model.predict(X_test)
    iter_accur.append(accuracy_score(y_test, y_hat_new))
    time_BFGS.append(end2- start2)
    sci_kit_accur.append(accuracy_score(y_test, y_hat_sk))
    time_SKLEARN.append(end1- start1) 

# using plotly
# trace1 = go.Scatter(
#     x = iter,
#     y = iter_accur
# )

# trace2 = go.Scatter(
#     x = iter,
#     y = sci_kit_accur
    
# )

# data = [trace1, trace2]

# fig = go.Figure(data=data )

# fig.update_layout(title_text='Accuracy Vs Number of Iterations Quasi Newton', xaxis_title="Number of Iterations", yaxis_title="Accuracy")
# fig.show()
/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:1182: FutureWarning:

`penalty='none'`has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`.

/Users/AbhilashArnipalli/anaconda3/lib/python3.11/site-packages/sklearn/linear_model/_logistic.py:460: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

In [44]:
trace1 = go.Scatter(
    x = np.cumsum(time_BFGS),
    y = iter_accur
)



data = [trace1]

fig = go.Figure(data=data )

fig.update_layout(title_text='Custom Implementation Accuracy Vs Time', xaxis_title="Time (seconds)", yaxis_title="Accuracy")
fig.show()
  • At 128 iterations the graph demonstrates that at 2.66 seconds the accuracy will plateu with its highest accuracy of 93.5%. Meaning that the following points will all converge to the same accuracy.
In [45]:
trace2 = go.Scatter(
    x = np.cumsum(time_SKLEARN),
    y = sci_kit_accur
)



data = [trace2]

fig = go.Figure(data=data )

fig.update_layout(title_text='SKLearn Accuracy Vs Time', xaxis_title="Time (seconds)", yaxis_title="Accuracy")
fig.show()
  • At 32 iterations SKlearn took 0.06 seconds for the accuracy to plateu with its highest accuracy being 96.3%.
  • From the analysis it is apparent that the SKlearn's implemention has a higher accuracy and the time it took was 0.06 seconds to plateu, compared to that of my implementaion which took 2.63 seconds. SKlearn took a significantly less amount of time than that of our implemenation demonstrating how it is more time efficient. The accuracy of my implementation is also lower with it being 93.5%. As said before, this may happen because SKLearn uses light BFGS and also has a max_iteration parameter that will not operate if it finds the plateu, avoiding needless computation, making it a lot faster than our custom implementation.

Deployment¶

  • In terms of deploying a suitable implementation, it would be preferred SKLearn's model. The accuracy and runtime of their model is more accurate and faster than my custom implementation. For the objective of meeting the business case, my models didn't quite reach the desired accuracy level. The business case ideally calls for an accuracy above 95 percent, with 7 percent margin for error, which is considered quite good for many real-world applications. Our best accuracy was 93.5%, indicating that we were able to classify the majority of the population with access to electricity.

  • When it comes to deploying a model for practical use, SKlearn's model out performs my custom implementation with an accuracy of 0.963. SKLearn is known for its efficiency and versatility, show casing a better accuracy and runtime performance, making it a more attractive option for real-world applications. In contrast, our accuracy, while commendable, doesn't quite match the precision achieved by SKlearn's model. The flexibility and regularization inherent to SKLearn is advantageous in various contexts demonstrating how it outperforms our model with a higher accuracy and a faster run time.

Conclusion¶

My experimentation led me to identify BFGS as the most efficient solver, achieving the highest accuracy for our personalized setup of 93.5%. Interestingly, although my custom BFGS model takes significantly longer than SKlearn, it still delivers acceptable accuracy.

In my detailed analysis of BFGS, I observed that boosting the number of iterations also elevates the accuracy. Interestingly, in my tests, accuracy seemed to plateau when hitting 128 iterations. Additionally, I discovered that increases in regularization strength inversely impacts BFGS accuracy.

my implementation delivered a slightly worse accuracy in a less time-efficient manor than SKlearn's LBFGS implementation.

While my model offers certain benefits, it falters in satisfying corporate standards for accuracy and falls short in efficiency. Efficiency is especially problematic when dealing with vast data quantities, implying that organizations handling extensive datasets would witness my model consuming over double the time taken by SKlearn, and sadly, delivering a lower accuracy to boot.

However, considering SKlearn's impressive 96% accuracy, it qualifies as a viable choice for practical applications, given its robust accuracy and time-efficient operation. For example, an energy firms could potentially use our SKlearn-based model to assess electricity availability across various global regions. A solid accuracy rate of 96.3% not only meets, but surpasses our business requirements, highlighting the model's ability to succeed as an application in real-world scenarios.

Additional Analysis¶

I was curious what elements impact the accuracy of the VectorBinaryLogisticRegression because I noticed it was performing quite similar to our implementation of BFGS.

I decided to test eta values from .01 to .09 and C values like .001, .005, .015, .1, and .15. By testing these I could see if any had a role in improving accuracy.

I also wanted to see if up to 150 iterations for penalties and C values could also impact the accuracy. I found a slighty higher accuracy of .884. This occurs many times whenever there is no penalty, C is very low either .005 or .001, and eta is around .1 or .2. Interestly, the number of iterations seemed to have a small effect. When I hit higher iterations, etas, C values, the accuracy dropped. I think this is likely because overfitting/overstepping and over regularization come into play.

All in all, it was a pretty insightful experiment, and I learned a lot about how different parameters can affect the accuracy of our models. It's fascinating how something as small as tweaking C or eta values or the number of iterations can impact the results.

In [46]:
from itertools import product
etas = range(1, 10, 1)
cs = [0.001, 0.005, 0.015, 0.1, .15]
iterations = [25, 50,75,100,125, 150]
penalties = ['none', 'L1', 'L2', 'L1L2']

for penalty, c, itrs, eta in product(penalties, cs, iterations, etas):
    lr = MultiClassLogisticRegression(eta=eta/100,
                                  iterations= itrs,
                                  C1=c,
                                  C2=c,
                                  solver=VectorBinaryLogisticRegression
                                 )
    lr.fit(X_train, y_train)
    yhat = lr.predict(X_test)
    print(f'When Penalty: {penalty} C: {c}, with Eta: {eta/100}, for Stochastic Logistic Regression, where iters: {itrs} Accuracy of: {accuracy_score(y_test, yhat)}')
When Penalty: none C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: none C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: none C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: none C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: none C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: none C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: none C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: none C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: none C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: none C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: none C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: none C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: none C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: none C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: none C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: none C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: none C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: none C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: none C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: none C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: none C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: none C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: none C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: none C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: none C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: none C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: none C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: none C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.001, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.001, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.001, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.001, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.005, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.005, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.005, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.005, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.015, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.015, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.015, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.015, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.1, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.1, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.1, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.1, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004
When Penalty: L1L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 25 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8823529411764706
When Penalty: L1L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 50 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 75 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.884083044982699
When Penalty: L1L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8806228373702422
When Penalty: L1L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 100 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8771626297577855
When Penalty: L1L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 125 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.01, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8788927335640139
When Penalty: L1L2 C: 0.15, with Eta: 0.02, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8737024221453287
When Penalty: L1L2 C: 0.15, with Eta: 0.03, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.04, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.05, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8685121107266436
When Penalty: L1L2 C: 0.15, with Eta: 0.06, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.07, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.08, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.870242214532872
When Penalty: L1L2 C: 0.15, with Eta: 0.09, for Stochastic Logistic Regression, where iters: 150 Accuracy of: 0.8719723183391004